feat: add MiniMax provider support with M2.7 as default #3581
octo-patch wants to merge 2 commits into simstudioai:main
Conversation
- Add MiniMax chat model provider with OpenAI-compatible API
- Support MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context)
- Add MiniMaxIcon to icons component
- Register provider in types, registry, utils, and models
- Clamp temperature to (0, 1] range per MiniMax API constraints
- Add unit tests for provider metadata and request execution
PR Summary (Medium Risk)
Overview: Registers MiniMax across the system by extending
Written by Cursor Bugbot for commit 1afdde0.
Greptile Summary: This PR adds MiniMax as a new OpenAI-compatible LLM provider, following the established pattern used by Groq, DeepSeek, Cerebras, and others. The registration, icon, model definitions, and streaming plumbing are all consistent with existing providers. Two logic bugs in the tool-calling path of executeRequest are flagged in the comments below.
Confidence Score: 2/5
Sequence Diagram

```mermaid
sequenceDiagram
    participant Executor
    participant MiniMaxProvider
    participant OpenAI_Client as OpenAI Client (MiniMax base URL)
    participant ToolExecutor
    Executor->>MiniMaxProvider: executeRequest(request)
    MiniMaxProvider->>OpenAI_Client: chat.completions.create(payload)
    OpenAI_Client-->>MiniMaxProvider: response (may include tool_calls)
    alt Streaming with no tools
        MiniMaxProvider-->>Executor: StreamingExecution (early return)
    else Non-streaming or tools present
        loop Tool iteration (≤ MAX_TOOL_ITERATIONS)
            MiniMaxProvider->>ToolExecutor: executeTool(name, params) [parallel]
            ToolExecutor-->>MiniMaxProvider: tool results
            MiniMaxProvider->>OpenAI_Client: chat.completions.create(next payload)
            OpenAI_Client-->>MiniMaxProvider: next response
        end
        alt Streaming requested (post-tool)
            MiniMaxProvider->>OpenAI_Client: chat.completions.create(stream=true)
            OpenAI_Client-->>MiniMaxProvider: stream
            MiniMaxProvider-->>Executor: StreamingExecution
        else Non-streaming
            MiniMaxProvider-->>Executor: ProviderResponse
        end
    end
```
Last reviewed commit: d5d4e40
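The sequence above can be reduced to a small control-flow sketch. All names here (executeRequest, callModel, runTool, the response shape) are illustrative stand-ins, not the actual sim codebase:

```typescript
// Illustrative control flow for an OpenAI-compatible provider with tool calling.
const MAX_TOOL_ITERATIONS = 10

interface ToolCall { id: string; name: string }
interface ModelResponse { content: string; toolCalls: ToolCall[] }

async function executeRequest(
  callModel: () => Promise<ModelResponse>,
  runTool: (call: ToolCall) => Promise<string>,
  streaming: boolean
): Promise<ModelResponse> {
  let response = await callModel()
  // Streaming with no tool calls returns early in the real provider.
  if (streaming && response.toolCalls.length === 0) return response
  let iterations = 0
  while (response.toolCalls.length > 0 && iterations < MAX_TOOL_ITERATIONS) {
    // Tools run in parallel; results are then fed back to the model.
    await Promise.all(response.toolCalls.map(runTool))
    response = await callModel()
    iterations++
  }
  return response
}
```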
```typescript
} catch (error) {
  logger.error('Error in MiniMax request:', { error })
}
```
Silent error swallowing breaks error propagation
The inner catch block catches all errors that occur during the tool-calling loop — including network failures on subsequent model calls (e.g., line 387 minimax.chat.completions.create) — but only logs them and lets execution fall through. The caller receives a ProviderResponse with partial/empty toolCalls and no indication that an error occurred, making failures invisible to the executor.
This should re-throw (or wrap in ProviderError) so the outer catch can convert it to a proper ProviderError with timing data:
```diff
 } catch (error) {
   logger.error('Error in MiniMax request:', { error })
+  throw error
 }
```
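To illustrate what "convert it to a proper ProviderError with timing data" could look like in the outer catch, here is a minimal sketch; the ProviderError class and its fields are assumptions, not sim's actual error type:

```typescript
// Hypothetical wrapper: once the inner catch re-throws, the outer catch can
// attach timing metadata before propagating the failure to the executor.
class ProviderError extends Error {
  constructor(message: string, public readonly durationMs: number) {
    super(message)
    this.name = 'ProviderError'
  }
}

async function withProviderError<T>(fn: () => Promise<T>): Promise<T> {
  const start = Date.now()
  try {
    return await fn()
  } catch (error) {
    // Re-thrown inner errors land here and get wrapped with timing data.
    const message = error instanceof Error ? error.message : String(error)
    throw new ProviderError(message, Date.now() - start)
  }
}
```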
```typescript
const toolArgs = JSON.parse(toolCall.function.arguments)
const tool = request.tools?.find((t) => t.id === toolName)

if (!tool) return null
```
Orphaned tool_calls message when tool is not found
When the model hallucinates a tool name (not present in request.tools), this function returns null. However, the assistant message is unconditionally pushed to currentMessages (lines 303–313) with all tool_calls entries. Any tool call returning null here will not produce a corresponding tool role message.
The MiniMax API (like other OpenAI-compatible APIs) requires every element in tool_calls to have a matching tool role message with the same tool_call_id. Sending mismatched entries will cause an API error on the very next minimax.chat.completions.create call.
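The pairing invariant can be expressed as a small standalone check. This helper is hypothetical (the real API enforces the constraint server-side), but it captures the rule the review describes:

```typescript
// OpenAI-compatible invariant: every tool_call id on an assistant message
// must be answered by a tool-role message carrying the same tool_call_id.
interface ChatMessage {
  role: 'assistant' | 'tool' | 'user' | 'system'
  tool_calls?: { id: string }[]
  tool_call_id?: string
}

function hasOrphanedToolCalls(messages: ChatMessage[]): boolean {
  const answered = new Set(
    messages.filter((m) => m.role === 'tool').map((m) => m.tool_call_id)
  )
  return messages.some(
    (m) =>
      m.role === 'assistant' &&
      (m.tool_calls ?? []).some((c) => !answered.has(c.id))
  )
}
```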
A simple fix is to add an error result for unrecognised tool calls rather than returning null:
```diff
 const toolArgs = JSON.parse(toolCall.function.arguments)
 const tool = request.tools?.find((t) => t.id === toolName)
-if (!tool) return null
+if (!tool) {
+  const toolCallEndTime = Date.now()
+  return {
+    toolCall,
+    toolName,
+    toolParams: {},
+    result: {
+      success: false,
+      output: undefined,
+      error: `Tool "${toolName}" not found`,
+    },
+    startTime: toolCallStartTime,
+    endTime: toolCallEndTime,
+    duration: toolCallEndTime - toolCallStartTime,
+  }
+}
```
```typescript
{
  id: 'MiniMax-M2.5-highspeed',
  pricing: {
    input: 0.6,
    cachedInput: 0.03,
    output: 2.4,
    updatedAt: '2025-06-01',
  },
  capabilities: {
```
Verify MiniMax-M2.5-highspeed pricing
MiniMax-M2.5-highspeed is priced at 2× the cost of the base model (input: $0.60/M vs $0.30/M). This is the opposite of what "highspeed" variants typically imply (usually a cheaper, faster version). Please double-check the MiniMax pricing page to confirm the values are not swapped before merging.
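For reference, the ratio the review is questioning, using the PR's own numbers (the $0.30/M base price is taken from the review text; the MiniMax pricing page remains the source of truth):

```typescript
// Sanity check of the per-million-input-token prices quoted in the review.
const baseInputPerM = 0.3      // MiniMax-M2.5 (assumed from the review text)
const highspeedInputPerM = 0.6 // MiniMax-M2.5-highspeed (from the diff)
// 2x the base price is unusual for a "highspeed" tier, which is typically cheaper.
const ratio = highspeedInputPerM / baseInputPerM
```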
```tsx
<title>MiniMax</title>
<rect width='120' height='120' rx='24' fill='#1A1A2E' />
<path
  d='M30 80V40l15 20 15-20v40M70 40v40M80 40l10 20 10-20v40'
```
SVG icon path draws incomplete second "M" character
Low Severity
The third sub-path M80 40l10 20 10-20v40 in the MiniMaxIcon SVG is missing its left vertical stroke. It traces (80,40)→(90,60)→(100,40)→(100,80), rendering as a V-shape with a trailing line — not an "M". Compare with the first sub-path which correctly starts from the bottom M30 80V40l15 20 15-20v40, drawing the upward left stroke before the V-shape. The third sub-path needs to start at the bottom (e.g. M80 80V40l10 20 10-20v40) to draw a matching "M".
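The difference between the two sub-paths can be checked mechanically. Both path strings below are taken from the review; the baseline test is a rough sketch of the property being described, not a real SVG parser:

```typescript
// Current and suggested 'd' fragments for the third letter stroke.
const current = 'M80 40l10 20 10-20v40'     // starts mid-letter: V-shape plus right stroke only
const suggested = 'M80 80V40l10 20 10-20v40' // starts at the baseline, drawing the left stroke

// An "M" glyph needs a vertical rise before its V-shape, i.e. the sub-path
// should begin at the letter's baseline (y = 80) and move up to y = 40.
const startsAtBaseline = (d: string): boolean => /^M\d+ 80V40/.test(d)
```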
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Set MiniMax-M2.7 as default model
- Keep all previous models as alternatives
- Update related tests
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 2 total unresolved issues (including 1 from previous review).
```tsx
  fill='none'
/>
</svg>
)
```
MiniMax icon is a hand-drawn placeholder, not official
Low Severity
The MiniMaxIcon is a custom hand-drawn SVG (drawing "MiM" with stroke paths) with a hardcoded dark background rect (fill='#1A1A2E') and hardcoded stroke color (#E94560). Every other provider icon in this file uses either currentColor for theme awareness or official brand SVG paths without opaque background fills. This icon won't adapt to light/dark themes and will visually stand out from all other provider icons. The official MiniMax logo SVG is publicly available (e.g., on LobeHub, Wikimedia Commons).
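A theme-aware variant would drop the background rect and use currentColor for the stroke, as the file's other icons do. The sketch below emits the SVG as a string rather than JSX so the property is easy to check; the path data is a placeholder, not the official MiniMax mark:

```typescript
// Sketch of a theme-aware icon: stroke follows the surrounding text color and
// there is no opaque background fill, matching the other provider icons.
function miniMaxIconSvg(pathD: string): string {
  return [
    `<svg viewBox='0 0 120 120' fill='none'>`,
    `<title>MiniMax</title>`,
    `<path d='${pathD}' stroke='currentColor' stroke-width='8' fill='none' />`,
    `</svg>`,
  ].join('')
}
```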


Summary
Add MiniMax as a first-class LLM provider with the latest M2.7 model as default.
Changes
Why
MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities.
Testing